Cross-lingual zero-resource named entity recognition model based on sentence-level generative adversarial network
Xiaoyan ZHANG, Zhengyu DUAN
Journal of Computer Applications    2023, 43 (8): 2406-2411.   DOI: 10.11772/j.issn.1001-9081.2022071124

To address the lack of labeled data in low-resource languages, which prevents the application of mature deep learning methods to Named Entity Recognition (NER), a cross-lingual NER model based on a sentence-level Generative Adversarial Network (GAN), namely SLGAN-XLM-R (Sentence Level GAN based on XLM-R), was proposed. Firstly, the labeled data of the source language was used to train the NER model on the basis of the pre-trained model XLM-R (XLM-Robustly optimized BERT pretraining approach); at the same time, linguistic adversarial training was performed on the embedding layer of the XLM-R model using the unlabeled data of the target language. Then, soft labels for the unlabeled data of the target language were predicted with the NER model. Finally, the labeled data of the source language and the target language were mixed to fine-tune the model again to obtain the final NER model. Experiments were conducted on four languages (English, German, Spanish, and Dutch) from the CoNLL2002 and CoNLL2003 datasets. The results show that with English as the source language, the F1 scores of the SLGAN-XLM-R model on the test sets of German, Spanish, and Dutch are 72.70%, 79.42%, and 80.03%, respectively, which are 5.38, 5.38, and 3.05 percentage points higher than those obtained by directly fine-tuning the XLM-R model.
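The soft-label step described above can be sketched in a few lines. This is a minimal, hypothetical illustration of pseudo-labeling and data mixing only; the function names, the confidence-filtering rule, and the threshold value are assumptions for the sketch, not details from the paper.

```python
# Illustrative sketch of the pseudo-labeling ("soft label") step:
# keep only target-language sentences the model labels confidently,
# then mix them with gold source-language data for the final fine-tune.
# All names and the min-confidence rule are assumptions, not the paper's code.

def pseudo_label(predict, target_sentences, threshold=0.9):
    """Keep sentences whose least-confident token prediction clears threshold.
    predict(sentence) is assumed to return a list of (label, confidence)."""
    labeled = []
    for sentence in target_sentences:
        preds = predict(sentence)
        if min(conf for _, conf in preds) >= threshold:
            labeled.append((sentence, [lab for lab, _ in preds]))
    return labeled

def mix_training_data(source_data, pseudo_data):
    """Final fine-tuning mixes gold source data with target pseudo-labels."""
    return list(source_data) + list(pseudo_data)
```

In practice the filtering rule matters: a sentence-level minimum is strict but avoids training on partially wrong label sequences.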

Comparison of three-way concepts under attribute clustering
Xiaoyan ZHANG, Jiayi WANG
Journal of Computer Applications    2023, 43 (5): 1336-1341.   DOI: 10.11772/j.issn.1001-9081.2022030399

Three-way concept analysis is an important topic in the field of artificial intelligence. The biggest advantage of this theory is that it can simultaneously study the “attributes that are commonly possessed” and the “attributes that are commonly not possessed” by the objects in a formal context. It is well known that the new formal context generated by attribute clustering is strongly connected to the original formal context, and there is a close internal connection between the original three-way concepts and the new three-way concepts obtained after attribute clustering. Therefore, a comparative study of three-way concepts under attribute clustering was carried out. Firstly, the concepts of pessimistic, optimistic and general attribute clustering were proposed on the basis of attribute clustering, and the relationships among the three were studied. Moreover, the difference between the original three-way concepts and the new ones was studied by comparing the clustering process with the formation process of three-way concepts. Furthermore, two minimum constraint indexes were put forward from the object-oriented and attribute-oriented perspectives respectively, and the influence of attribute clustering on the three-way concept lattice was explored. The above results further enrich the theory of three-way concept analysis and provide feasible ideas for the field of visual data processing.
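The two derivation operators behind the abstract's “commonly possessed” and “commonly not possessed” attributes can be shown concretely. This is a small sketch over a toy formal context (the dictionary representation is an assumption for illustration):

```python
# A formal context mapped as object -> set of attributes it possesses.
# For a set of objects X, three-way concept analysis uses two operators:
# the attributes all of X possess, and the attributes none of X possess.

def commonly_possessed(context, objects, attributes):
    """Positive operator: attributes possessed by every object in X."""
    return {a for a in attributes if all(a in context[o] for o in objects)}

def commonly_not_possessed(context, objects, attributes):
    """Negative operator: attributes possessed by no object in X."""
    return {a for a in attributes if all(a not in context[o] for o in objects)}
```

Running both operators on the same object set is exactly what lets three-way concepts capture positive and negative information at once.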

Answer selection model based on pooling and feature combination enhanced BERT
Jie HU, Xiaoxi CHEN, Yan ZHANG
Journal of Computer Applications    2023, 43 (2): 365-373.   DOI: 10.11772/j.issn.1001-9081.2021122167

Current mainstream models cannot fully express the semantics of question-answer pairs, do not fully consider the relationships between the topic information of question-answer pairs, and use activation functions that suffer from soft saturation; these problems affect overall model performance. To solve them, an answer selection model based on pooling and feature-combination enhanced BERT (Bidirectional Encoder Representations from Transformers) was proposed. Firstly, adversarial samples and a pooling operation were introduced to represent the semantics of question-answer pairs on the basis of the pre-trained model BERT. Secondly, the relationships between the topic information of question-answer pairs were strengthened by feature combination of the topic information. Finally, the activation function in the hidden layer was improved, and the spliced vector was passed through the hidden layer and a classifier to complete the answer selection task. Model validation was performed on the SemEval-2016CQA and SemEval-2017CQA datasets. The results show that, compared with the tBERT model, the proposed model improves accuracy by 3.1 and 2.2 percentage points respectively, and F1 score by 2.0 and 3.1 percentage points respectively. It can be seen that the overall effectiveness of the proposed model on the answer selection task is improved, and both the accuracy and the F1 score of the model are better than those of the comparison model.
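The pooling operation mentioned above is commonly a concatenation of mean pooling and max pooling over the token vectors; a minimal sketch of that idea (the exact pooling used in the paper is not specified here, so this is an assumed, generic variant):

```python
# Generic mean+max pooling over a sequence of token vectors:
# the mean captures overall sentence semantics, the max captures
# the strongest feature activations; concatenating both is a common
# way to build a fixed-size sentence representation.

def mean_max_pool(token_vectors):
    """Concatenate mean pooling and max pooling over token vectors."""
    dim = len(token_vectors[0])
    n = len(token_vectors)
    mean = [sum(v[i] for v in token_vectors) / n for i in range(dim)]
    mx = [max(v[i] for v in token_vectors) for i in range(dim)]
    return mean + mx
```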

Remaining useful life prediction method of aero-engine based on optimized hybrid model
Yuefeng LIU, Xiaoyan ZHANG, Wei GUO, Haodong BIAN, Yingjie HE
Journal of Computer Applications    2022, 42 (9): 2960-2968.   DOI: 10.11772/j.issn.1001-9081.2021071343

In existing Remaining Useful Life (RUL) prediction methods for aero-engines, the data at different time steps, including the original data and the extracted features, are not weighted simultaneously, which leads to low RUL prediction accuracy. Therefore, an RUL prediction method based on an optimized hybrid model was proposed. Firstly, three different paths were used to extract features: 1) the mean value and trend coefficient of the original data were input into a fully connected network; 2) the original data were input into a Bidirectional Long Short-Term Memory (Bi-LSTM) network, and an attention mechanism was used to process the obtained features; 3) an attention mechanism was used to process the original data, and the weighted features were input into a Convolutional Neural Network (CNN) and a Bi-LSTM network. Then, adopting the idea of fusing multi-path features for prediction, the extracted features were fused and input into a fully connected network to obtain the RUL prediction result. Finally, the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) datasets were used to verify the effectiveness of the method. Experimental results show that the proposed method performs well on all four datasets. Taking the FD001 dataset as an example, the Root Mean Square Error (RMSE) of the proposed method is 9.01% lower than that of the Bi-LSTM network.
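Path 1 above feeds the mean value and trend coefficient of a sensor window into a fully connected network. The two statistics themselves are simple; a sketch of how they might be computed (the least-squares slope is an assumed interpretation of "trend coefficient"):

```python
# Mean and trend coefficient of a sensor-reading window.
# The trend coefficient is taken here as the ordinary least-squares slope
# of the readings against the time index, one common interpretation.

def mean_and_trend(window):
    """Return (mean value, least-squares slope) of a 1-D sensor window."""
    n = len(window)
    t_mean = (n - 1) / 2                      # mean of time indices 0..n-1
    x_mean = sum(window) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(window))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return x_mean, num / den
```

A rising slope on a degradation-related sensor is exactly the kind of signal an RUL model wants summarized per window.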

Chinese named entity recognition based on knowledge base entity enhanced BERT model
Jie HU, Yan HU, Mengchi LIU, Yan ZHANG
Journal of Computer Applications    2022, 42 (9): 2680-2685.   DOI: 10.11772/j.issn.1001-9081.2021071209

Aiming at the problem that the pre-trained model BERT (Bidirectional Encoder Representation from Transformers) lacks vocabulary information, a Chinese named entity recognition model called OpenKG + Entity Enhanced BERT + CRF (Conditional Random Field), based on a knowledge base entity enhanced BERT model, was proposed on the basis of the semi-supervised entity enhanced minimum mean-square error pre-training model. Firstly, documents were downloaded from the Chinese general encyclopedia knowledge base CN-DBPedia, and entities were extracted with the Jieba Chinese text segmenter to expand the entity dictionary. Then, the entities in the dictionary were embedded into BERT for pre-training, and the word vectors obtained from training were input into a Bidirectional Long Short-Term Memory (BiLSTM) network for feature extraction. Finally, the results were corrected by the CRF layer and output. Model validation was performed on the CLUENER 2020 and MSRA datasets, and the proposed model was compared with the Entity Enhanced BERT pre-training, BERT+BiLSTM, ERNIE and BiLSTM+CRF models. Experimental results show that, compared with these four models, the proposed model improves the F1 score on the two datasets by 1.63 and 1.1 percentage points, 3.93 and 5.35 percentage points, 2.42 and 4.63 percentage points, and 6.79 and 7.55 percentage points, respectively. It can be seen that the overall effect of the proposed model on named entity recognition is improved, and its F1 scores are better than those of the comparison models.

Lightweight attention mechanism module based on squeeze and excitation
Zhenhu LYU, Xinzheng XU, Fangyan ZHANG
Journal of Computer Applications    2022, 42 (8): 2353-2360.   DOI: 10.11772/j.issn.1001-9081.2021061037

Focusing on the issue that embedding attention mechanism modules into a Convolutional Neural Network (CNN) to improve application accuracy also increases the number of parameters and the computational cost, the lightweight Height Dimensional Squeeze and Excitation (HD-SE) and Width Dimensional Squeeze and Excitation (WD-SE) modules, based on squeeze and excitation, were proposed. To make full use of the potential information in feature maps, the height-dimensional and width-dimensional weight information of a feature map was extracted by HD-SE and WD-SE respectively through squeeze and excitation operations; the obtained weight information was then applied to the corresponding tensors of the feature maps in the two dimensions to improve model accuracy. Experiments were conducted on the CIFAR10 and CIFAR100 datasets after embedding HD-SE and WD-SE into the Visual Geometry Group 16 (VGG16), Residual Network 56 (ResNet56), MobileNetV1 and MobileNetV2 models respectively. Experimental results show that, compared with state-of-the-art attention mechanism modules such as the Squeeze and Excitation (SE) module, Coordinate Attention (CA) block, Convolutional Block Attention Module (CBAM) and Efficient Channel Attention (ECA) module, HD-SE and WD-SE add fewer parameters and less computational cost to the network models while achieving the same or even better accuracy.
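The squeeze-excite-rescale pattern along one spatial dimension can be sketched on a single 2D feature map. This is a deliberately simplified illustration of the height-dimension case: the real HD-SE module also includes a learned excitation stage, which is replaced here by a plain sigmoid gate (an assumption for brevity).

```python
import math

def hd_se(feature_map):
    """Height-dimension squeeze-and-excitation, simplified:
    squeeze each row (height position) to a scalar by averaging over width,
    gate it with a sigmoid, and rescale the whole row by that weight.
    The learned excitation MLP of the real module is omitted."""
    weights = [1.0 / (1.0 + math.exp(-sum(row) / len(row)))
               for row in feature_map]
    return [[w * v for v in row] for w, row in zip(weights, feature_map)]
```

The width-dimension variant (WD-SE) is symmetric: average over height per column and rescale columns instead of rows.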

Data center server energy consumption optimization algorithm combining XGBoost and Multi-GRU
Mingyao SHEN, Meng HAN, Shiyu DU, Rui SUN, Chunyan ZHANG
Journal of Computer Applications    2022, 42 (1): 198-208.   DOI: 10.11772/j.issn.1001-9081.2021071291

With the rapid development of cloud computing technology, the number of data centers has increased significantly, and the ensuing energy consumption problem has gradually become one of the research hotspots. Aiming at the problem of server energy consumption optimization, a data center server energy consumption optimization algorithm combining eXtreme Gradient Boosting (XGBoost) and Multi-Gated Recurrent Unit (Multi-GRU), named ECOXG, was proposed. Firstly, data such as the resource occupation information and the energy consumption of each server component were collected with Linux terminal monitoring commands and power meters, and the data were preprocessed to obtain the resource utilization rates. Secondly, the resource utilization rates were assembled into time series in vector form, which were used to train the Multi-GRU load prediction model; simulated frequency reduction was then applied to the servers according to the prediction results to obtain the load data after frequency reduction. Thirdly, the resource utilization rates of the servers were combined with the energy consumption data at the same moments to train the XGBoost energy consumption prediction model. Finally, the load data after frequency reduction were input into the trained XGBoost model to predict the energy consumption of the servers after frequency reduction. Experiments on the actual resource utilization data of 6 physical servers showed that the ECOXG algorithm reduced the Root Mean Square Error (RMSE) by 50.9%, 31.0%, 32.7% and 22.9% compared with the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) network, CNN-GRU and CNN-LSTM models, respectively; meanwhile, compared with the LSTM, CNN-GRU and CNN-LSTM models, it saved 43.2%, 47.1% and 59.9% of the training time, respectively.
Experimental results show that the ECOXG algorithm can provide a theoretical basis for the prediction and optimization of server energy consumption, and that it is significantly better than the comparison algorithms in both accuracy and operating efficiency. In addition, the power consumption of a server after the simulated frequency reduction is significantly lower than its real power consumption, and the energy-saving effect is outstanding when server utilization is low.
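Constructing the resource-utilization time series for the load prediction model amounts to cutting the utilization sequence into overlapping (window, next value) pairs; a minimal sketch under that assumption (window width and pairing are illustrative, not the paper's exact setup):

```python
# Build supervised training samples for a sequence model from a
# resource-utilization series: each sample pairs a window of past
# utilization values with the next value to predict.

def sliding_windows(utilization, width):
    """Return a list of (input window, next value) pairs."""
    return [(utilization[i:i + width], utilization[i + width])
            for i in range(len(utilization) - width)]
```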

Stock index forecasting method based on corporate financial statement data
Jihou WANG, Peiguang LIN, Jiaqian ZHOU, Qingtao LI, Yan ZHANG, Muwei JIAN
Journal of Computer Applications    2021, 41 (12): 3632-3636.   DOI: 10.11772/j.issn.1001-9081.2021061006

All the market activities of stock market participants combine to affect stock market changes, making stock market volatility complex and accurate prediction of stock prices a challenge. Among the activities that affect stock market changes, financial disclosure is an attractive and potentially rewarding basis for predicting stock index changes. In order to deal with the complex changes of the stock market, a stock index prediction method that incorporates data from the financial statements disclosed by companies was proposed. Firstly, the historical stock index data and the corporate financial statement data were preprocessed; the main task was dimensionality reduction of the high-dimensional matrix generated from the corporate financial statement data. Then, a dual-channel Long Short-Term Memory (LSTM) network was used to forecast on the normalized data. Experimental results on the SSE 50 and CSI 300 index datasets show that the prediction effect of the proposed method is better than that of using only the historical stock index data.

Construction of even-variable rotation symmetric Boolean functions with optimum algebraic immunity
CHEN Yindong, XIANG Hongyan, ZHANG Yanan
Journal of Computer Applications    2014, 34 (2): 444-447.  
Algebraic immunity is one of the most significant cryptographic properties of Boolean functions. In order to resist algebraic attacks, the Boolean functions used in stream ciphers must have high algebraic immunity. For a given even n, this paper constructed a family of n-variable rotation symmetric Boolean functions with optimum algebraic immunity. Based on the majority function, some orbits of different Hamming weights were chosen, and the values of the function on these orbits were changed. Given a sufficient condition for Boolean functions to have optimum algebraic immunity, the newly constructed Boolean functions were proved to satisfy this condition; therefore, their algebraic immunity is optimum and algebraic attacks can be resisted effectively.
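The two building blocks named in the abstract, the majority function and rotation orbits, are easy to state concretely. A small sketch (the threshold convention for even n is one common choice, an assumption here):

```python
# The majority function and the rotation orbits it is defined over.
# A rotation symmetric Boolean function is constant on each orbit of
# inputs under cyclic rotation; the construction modifies the majority
# function's values on selected orbits of given Hamming weight.

def majority(bits):
    """Majority function: 1 iff the Hamming weight is at least n/2
    (one common convention for even n)."""
    return int(sum(bits) >= len(bits) / 2)

def rotations(bits):
    """The orbit of an input vector under cyclic rotation."""
    return {tuple(bits[i:] + bits[:i]) for i in range(len(bits))}
```

Note that orbits can have different sizes: a vector with internal periodicity, such as 1010, has a smaller orbit than an aperiodic one.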
Patch similarity anisotropic diffusion algorithm based on variable exponent for image denoising
DONG Chanchan, ZHANG Quan, HAO Huiyan, ZHANG Fang, LIU Yi, SUN Weiya, GUI Zhiguo
Journal of Computer Applications    2014, 34 (10): 2963-2966.   DOI: 10.11772/j.issn.1001-9081.2014.10.2963

Concerning the contradiction between edge preservation and noise suppression in image denoising, a patch-similarity anisotropic diffusion algorithm based on a variable exponent was proposed. The algorithm combined an adaptive variable-exponent Perona-Malik (PM) denoising model with the idea of patch similarity, and constructed a new edge indicator and a new diffusion coefficient function. Traditional anisotropic diffusion denoising algorithms detect edges from the intensity similarity (or gradient information) of single pixels and therefore cannot effectively preserve weak edges and details such as texture; by exploiting the intensity similarity of neighboring patches, the proposed algorithm preserves more detail while removing noise. The simulation results show that, compared with traditional image denoising algorithms based on Partial Differential Equations (PDE), the proposed algorithm improves the Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR) to 16.602480 dB and 31.284672 dB respectively, and enhances the anti-noise capability. At the same time, the filtered image preserves more detail features, such as weak edges and textures, and has good visual effects. Therefore, the algorithm achieves a good balance between noise reduction and edge preservation.
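The difference between a pixel-based and a patch-based edge indicator can be sketched directly. This is an illustrative simplification: the mean-squared patch distance and the plain PM coefficient below stand in for the paper's more elaborate variable-exponent indicator.

```python
# Patch-similarity edge indicator (simplified): compare whole patches
# instead of single pixel intensities, then map the distance through a
# PM-style diffusion coefficient (near 1 in flat regions, near 0 at edges).
# Both formulas here are generic stand-ins, not the paper's exact ones.

def patch_distance(img, p, q, radius=1):
    """Mean squared intensity difference between patches centred at p and q."""
    (pi, pj), (qi, qj) = p, q
    diffs = [(img[pi + di][pj + dj] - img[qi + di][qj + dj]) ** 2
             for di in range(-radius, radius + 1)
             for dj in range(-radius, radius + 1)]
    return sum(diffs) / len(diffs)

def diffusion_coefficient(d, k=10.0):
    """Classic PM coefficient g(d) = 1 / (1 + (d/k)^2)."""
    return 1.0 / (1.0 + (d / k) ** 2)
```

Because a whole patch must differ before diffusion is suppressed, isolated noisy pixels do not masquerade as edges, which is the mechanism behind the improved weak-edge preservation.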

H.264 authentication playing method based on video steganography
CAI Yangyan, ZHANG Yu
Journal of Computer Applications    2014, 34 (1): 171-174.   DOI: 10.11772/j.issn.1001-9081.2014.01.0171
In a multimedia content distribution and playing system, the files that can be played must be restricted without compromising the user experience. Firstly, binary images were selected adaptively and embedded during intra prediction by modifying the AC coefficients at specific locations. Then, the extracted watermark was matched against the adaptively selected binary image; if they did not match, the video could not continue to be decoded and played. The experimental results demonstrate that the watermarking algorithm has high robustness: after watermarks are embedded, the video's Peak Signal-to-Noise Ratio (PSNR) and bit rate remain almost unchanged. The proposed algorithm also has low complexity and strong practicability, and illegal videos can be filtered effectively.
Mixed key management scheme based on domain for wireless sensor network
WANG Binbin, ZHANG Yanyan, ZHANG Xuelin
Journal of Computer Applications    2014, 34 (1): 90-94.   DOI: 10.11772/j.issn.1001-9081.2014.01.0090
Concerning the problems of low connectivity, high storage consumption and high communication cost in current key management strategies, this paper proposed a mixed key management scheme based on domains for Wireless Sensor Networks (WSN). The scheme divided the deployment area into a number of square regions, each consisting of member nodes and head nodes. According to their pre-distributed key space information, any pair of nodes in the same region could establish a session key, while nodes in different regions could only communicate through head nodes. The eigenvalues and eigenvectors of multiple asymmetric quadratic form polynomials were computed to obtain the orthogonal diagonalization information, by which the head nodes could achieve identification and generate session keys with their neighbor nodes. The performance analysis shows that, compared with existing key management schemes, this scheme achieves full connectivity and significant improvements in communication overhead, storage consumption and security.
Performance evaluation on open source cloud platform for high performance computing
LI Chunyan, ZHANG Xuejie
Journal of Computer Applications    2013, 33 (12): 3580-3585.  
Cloud computing is a new model of Internet resource utilization that provides a variety of IT services, and it has been widely used in various fields, including High Performance Computing (HPC). However, its virtualization introduces some performance overhead, and since virtualization technologies differ across cloud platforms, their HPC performance also differs. The HPC performance and real-workload behavior of the open source cloud platforms Nimbus, OpenNebula and OpenStack were compared and analyzed with the HPC Challenge (HPCC) benchmark suite and the NAS Parallel Benchmarks (NPB), in terms of CPU, memory, communication, scalability and overall HPC performance. The experimental results show that OpenStack performs better for computation-intensive high performance applications, and is therefore a good choice for implementing HPC applications on an open source cloud platform.
Surface development of oblique circular cone with SolidWorks re-development
SONG Yan, ZHANG Jingjing, CHEN Xiaopeng, QIAN Qing
Journal of Computer Applications    2013, 33 (04): 1119-1121.   DOI: 10.3724/SP.J.1087.2013.01119
Because the unfolding process of a three-way conical pipe is complex, its development drawing cannot be completed automatically. This paper introduced a general analytic algorithm for the surface development of an oblique circular cone based on the idea of symbolic-graphic combination. The equation of the intersection line of two cones was established by exploring the mathematical model of the skew intersection of the cone axes. The intersection lines were created in the SolidWorks interface by means of VB programming and verified in SolidWorks. After the flattening curve equation was set up, the development drawing was produced in SolidWorks. Thus, the development of two cones with arbitrary sizes and skew axes was designed parametrically in the SolidWorks interface. The unfolding method is reliable and makes automatic development drawing practical. The results and relevant data also show that the algorithm is fast, highly precise and strongly universal, and can be used for the development mapping of a variety of sheet metal parts to facilitate their manufacture.
2D bar code recognition marking in metal parts
WANG Cui-yan, ZHANG Jian-chao
Journal of Computer Applications    2012, 32 (11): 3210-3213.   DOI: 10.3724/SP.J.1087.2012.03210
Direct Part Marking (DPM) is an important means of product identification, and 2D bar code technology is one of its key components. In this paper, the DPM label used metal as the background, with the 2D bar code laser-marked on it. Compared with recognizing 2D bar codes printed on paper, recognizing 2D bar codes on a metal background is more complex. This paper improved the traditional identification methods, making integrated use of largest-connected-component extraction, an improved Hough transform, code division based on maximum matching degree, and a non-destructive gray-image information extraction method to achieve rough positioning, precise positioning, calibration, bar code segmentation and data extraction of the bar code images. The results show that this approach has strong anti-interference capability for worn, polluted, distorted and unevenly illuminated 2D bar codes on metal, and reliable recognition results were obtained.
Data storage method supporting large-scale smart grid
SONG Bao-yan, ZHANG Hong-mei, WANG Yan, LI Qiong
Journal of Computer Applications    2012, 32 (09): 2496-2499.   DOI: 10.3724/SP.J.1087.2012.02496
Concerning the massive, real-time and dynamic nature of the monitoring data in a large-scale smart grid, a new data-centric storage approach supporting large-scale smart grids was proposed, which is a hierarchical extension scheme for storing massive dynamic data. Firstly, an extended Hash coding method could adjust the number of storage nodes dynamically to avoid data loss during sudden or frequent events and increase system availability. Then, a multi-threshold leveling method was used to distribute data across multiple storage nodes, which avoided the hotspot storage problem and achieved load balance. Simulation results show that this method satisfies the need for massive data storage, obtains better load balance, lowers total energy consumption and extends the life cycle of the whole network.
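The load-balancing idea behind threshold-based placement can be sketched simply. This is an assumed, single-threshold illustration of spilling from a hashed "preferred" node to its successors; the paper's scheme uses several threshold levels, which this sketch does not reproduce.

```python
# Threshold-based data placement sketch: store at the preferred (hashed)
# node unless it is already above the load threshold, otherwise probe the
# following nodes in order. This avoids hotspot nodes at the cost of a
# short probe sequence. Single threshold only; illustrative, not the
# paper's full multi-threshold leveling method.

def choose_node(nodes_load, preferred, threshold):
    """Return the index of the node that should store the next data item."""
    n = len(nodes_load)
    for step in range(n):
        node = (preferred + step) % n
        if nodes_load[node] < threshold:
            return node
    # all nodes saturated: fall back to the least-loaded one
    return min(range(n), key=lambda i: nodes_load[i])
```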
Identity-based cluster key agreement scheme in Ad Hoc network
LIU Xue-yan, ZHANG Qiang, WANG Cai-fen
Journal of Computer Applications    2012, 32 (08): 2258-2327.   DOI: 10.3724/SP.J.1087.2012.02258
In view of the limited energy and dynamic topology of Ad Hoc networks, an identity-based group key agreement scheme was presented. The topology was organized into clusters, and the scheme allowed the synchronous execution of multi-party key agreement protocols based on pairings. The number of cluster members did not affect the key agreement, no interaction was required during the agreement, and the scheme provided authentication and supported dynamic membership. In addition, the scheme was proved semantically secure under the Decisional Bilinear Diffie-Hellman (DBDH) assumption. Finally, compared with previous schemes, the proposed scheme has advantages in the number of negotiation rounds and in authentication.
Clustering model based on weighted intuitionistic fuzzy sets
CHANG Yan, ZHANG Shi-bin
Journal of Computer Applications    2012, 32 (04): 1070-1073.   DOI: 10.3724/SP.J.1087.2012.01070
Concerning the limitations of existing clustering methods based on intuitionistic fuzzy sets, a clustering model called the Weighted Intuitionistic Fuzzy Set Clustering Model (WIFSCM) was proposed. In this model, the concepts of equivalent samples and weighted intuitionistic fuzzy sets were put forward in a specific feature space, and on this basis the objective function of the intuitionistic fuzzy clustering algorithm was defined. Iterative formulas for the clustering centers and the membership degree matrix were derived from the objective function. A density function based on weighted intuitionistic fuzzy sets was defined, from which the initial clustering centers were obtained to reduce the number of iterations. Gray image segmentation experiments show that WIFSCM is effective, and that it is nearly a hundred times faster than the IFCM algorithm.
Wikipedia-based focused crawling with page segmentation
XIONG Zhong-yang, SHI Yan, ZHANG Yu-fang
Journal of Computer Applications    2011, 31 (12): 3264-3267.  
To overcome the shortcomings and limitations of traditional focused crawling methods, a Wikipedia-based focused crawler with page segmentation was proposed. It built the topic vector that describes the topic from Wikipedia's category tree and topic description documents; introduced page segmentation after downloading a web page to filter out noise nodes; took block relevance into consideration when computing the priority of candidate links, making up for the limited information in anchor texts; and, by changing the size of the topic vector space, validated whether the level of detail of the topic description affects the performance of focused crawling. Experimental results show that this method is effective and scalable, and that, within limits, the more detailed the topic description, the more topic-relevant the collected web pages are.
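Combining anchor-text relevance with block relevance when scoring a candidate link can be sketched with cosine similarity over term-weight vectors. The mixing weight `alpha` and the dict-based vectors are assumptions for the sketch, not values from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def link_priority(topic, anchor_vec, block_vec, alpha=0.5):
    """Candidate-link priority mixing anchor-text relevance with the
    relevance of the page block containing the link. alpha is illustrative."""
    return alpha * cosine(anchor_vec, topic) + (1 - alpha) * cosine(block_vec, topic)
```

When the anchor text is short or generic, the block term makes the score still informative, which is the point the abstract makes.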
Image segmentation based on grayscale iteration threshold pulse coupled neural network
LI Hai-yan, ZHANG Yu-feng, SHI Xin-ling, CHEN Jian-hua
Journal of Computer Applications    2011, 31 (10): 2753-2756.   DOI: 10.3724/SP.J.1087.2011.02753
A new method, called Grayscale Iteration Threshold Pulse Coupled Neural Network (GIT-PCNN), was proposed for image segmentation. GIT-PCNN reduced the number of parameters required by the conventional PCNN, and the exponentially decaying threshold was improved to depend on the grayscale statistics of the original image. When GIT-PCNN was applied to image segmentation, no parameters or iteration counts needed to be determined, since segmentation could be completed in a single PCNN firing process; therefore, GIT-PCNN required no specific rule as the iteration stop condition. GIT-PCNN made good use of the grayscale information of the original image and of the pulse characteristic of PCNN whereby the neurons associated with groups of spatially connected pixels of similar intensity tend to pulse together when partitioning images. The experimental results show that GIT-PCNN is better than classical PCNN-based segmentation algorithms in visual evaluation, subjective indices and speed performance.
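A threshold driven by grayscale statistics, as described above, can be illustrated with the classic iterative threshold-selection scheme: start at the global mean and move the threshold to the midpoint of the two class means until it stabilizes. This is a generic illustration of the statistics-driven idea, not the exact GIT-PCNN firing threshold.

```python
# Classic iterative threshold selection on grayscale statistics:
# t starts at the global mean; each step sets t to the midpoint of the
# means of the two classes it induces, until t stops moving. Shown only
# as an illustration of a grayscale-statistics-driven threshold.

def iterative_threshold(pixels, eps=0.5):
    """Return a segmentation threshold for a flat list of gray values."""
    t = sum(pixels) / len(pixels)
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        if not low or not high:          # degenerate: one class empty
            return t
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < eps:
            return new_t
        t = new_t
```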
Adaptive background update based on Gaussian mixture model under complex condition
Ming-zhi LI, Zhi-qiang MA, Yong SHAN, Xiao-yan ZHANG
Journal of Computer Applications    2011, 31 (07): 1831-1834.   DOI: 10.3724/SP.J.1087.2011.01831
In view of the background update problems of the Gaussian Mixture Model (GMM), such as sudden illumination changes and mutual transformation between target and background, a new adaptive background update algorithm was proposed. First, the algorithm determined whether a sudden illumination change had occurred according to the size of the currently detected target, and took corresponding update measures. If no sudden illumination change occurred, the backgrounds of the background region and the target region were updated separately. The update of the target region was the main focus: the background update rate of the target region was adjusted according to the target's characteristic parameters, such as size, movement velocity and match times. The simulation results show that the algorithm not only guarantees the integrity of target detection, but also improves the model's adaptation to background changes.
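Updating the target region at a slower rate than the background region can be sketched with a running-average model. The two rate constants here are illustrative assumptions; the paper adapts the target-region rate to target size, velocity and match times rather than fixing it.

```python
# Region-dependent running-average background update: pixels inside a
# detected target are absorbed into the background more slowly, so a
# briefly stationary object does not melt into the background at once.
# Rate values are illustrative, not the paper's adaptive rates.

def update_background(bg, frame, in_target, base_rate=0.05, target_rate=0.005):
    """bg, frame: 2-D grayscale arrays; in_target: 2-D boolean mask."""
    out = []
    for b_row, f_row, m_row in zip(bg, frame, in_target):
        out.append([(1 - (target_rate if m else base_rate)) * b
                    + (target_rate if m else base_rate) * f
                    for b, f, m in zip(b_row, f_row, m_row)])
    return out
```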
Environmental perception and the adaptive research in moving object detection
Yan ZHANG, Ji-chang GUO, Chen WANG
Journal of Computer Applications    2011, 31 (07): 1827-1830.  
In complicated environments, any change can influence the accuracy of object detection. Therefore, an algorithm combining the Generalized Gaussian Mixture Model (GGMM) and background subtraction was put forward to detect moving objects. The model has the flexibility to perceive the environment and model the video background adaptively in the presence of environmental changes (such as radial illumination gradients, background disturbance, shadows and noise), and when a sudden illumination change occurs, the model can adapt quickly. In order to meet the real-time requirement, the algorithm updates the model every two frames. The experiments show that it meets the real-time requirement and detects moving objects accurately.
Deep Web query interface identification approach based on label coding
WANG Yan SONG Bao-yan ZHANG Jia-yang ZHANG Hong-mei LI Xiao-guang
Journal of Computer Applications    2011, 31 (05): 1351-1354.   DOI: 10.3724/SP.J.1087.2011.01351
Abstract1038)      PDF (598KB)(852)       Save
In this paper, concerning the complexity of calculation and maintenance as well as matching ambiguity, a Deep Web query interface identification approach based on label coding was proposed after a thorough study of current query interface identification approaches. The approach coded and grouped labels according to the directivity and the irregular arrangement of the query interface. Identification approaches for simple attributes and composite attributes and a processing approach for isolated texts were proposed, with each label group treated as an independent unit when identifying feature information. The texts matching an element were determined by constraints on the label subscripts, which greatly reduced the number of texts considered when matching an element and avoided the matching ambiguity caused by massive heuristic rules; the presentation of nested information was solved effectively and efficiently by clustering twice.
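The subscript constraint can be illustrated with a toy sketch: within one label group, only texts whose subscript precedes an input element's subscript are candidates for labeling it, so each element is matched against a handful of nearby texts instead of the whole page. Everything here (the tuple encoding, the function name, the "consume all pending texts" rule) is a hypothetical simplification, not the paper's algorithm.

```python
def match_texts(group):
    """Within one label group, attach each text to the nearest following
    input element (subscript constraint: text index < element index)."""
    pairs, pending = [], []
    for idx, (kind, value) in enumerate(group):
        if kind == 'text':
            pending.append((idx, value))
        elif kind == 'input':
            # only texts preceding this element in the group may label it
            labels = [v for _, v in pending]
            pairs.append((value, ' '.join(labels)))
            pending = []
    return pairs
```

Because each group is processed independently, a text in one group can never be mistakenly matched to an element in another, which is one way the ambiguity of global heuristic matching is avoided.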
Related Articles | Metrics
New scheme of ID-based authenticated multi-party key agreement
LIU Xue-yan ZHANG Qiang WANG Cai-fen
Journal of Computer Applications    2011, 31 (05): 1302-1304.   DOI: 10.3724/SP.J.1087.2011.01302
Abstract1322)      PDF (433KB)(857)       Save
An authenticated key agreement protocol allows a group of users in an open network environment to authenticate each other and share a secure session key. This article proposed a new ID-based authenticated multi-party key agreement scheme based on the McCullagh-Barreto scheme. A key seed was introduced to update the temporary public/private key pairs. The new scheme realizes authentication, improves security, and successfully resists the Reveal query attack and the key compromise impersonation attack; it also has properties such as no key control and equal contribution.
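The key-seed idea, deriving a fresh temporary private key per session from a shared seed rather than re-running an exchange, can be sketched in a toy form. This is only an illustration of seed-based key updating: the real scheme operates over a bilinear pairing group, and the derivation function, parameter names and use of SHA-256 here are assumptions.

```python
import hashlib

def ephemeral_private_key(key_seed: bytes, session: int, q: int) -> int:
    """Derive a fresh temporary private key for a given session counter
    from a shared key seed (toy sketch of seed-based key updating)."""
    digest = hashlib.sha256(key_seed + session.to_bytes(8, 'big')).digest()
    return int.from_bytes(digest, 'big') % q or 1  # nonzero scalar mod q
```

Each new session counter yields a different temporary key, so compromising one session's ephemeral key does not directly expose the keys of other sessions.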
Related Articles | Metrics
AES and its software implementation based on ARM920T
BAI Ru-xue LIU Hong-yan ZHANG Xin-he
Journal of Computer Applications    2011, 31 (05): 1295-1297.   DOI: 10.3724/SP.J.1087.2011.01295
Abstract1354)      PDF (553KB)(864)       Save
To improve the efficiency of the Advanced Encryption Standard (AES) algorithm on ARM processors, an optimization of AES was introduced and realized on the ARM920T processor. One-time key expansion was adopted, and SubBytes() and MixColumns() were combined into precomputed T-tables, which increased the speed. The proposed algorithm, programmed in C, was simulated and debugged on the ARM Develop v1.2 platform. Different implementations were compared in terms of storage space and processing speed, and the performance of the algorithm with different key lengths was given. The experimental results show that the execution speed of the presented algorithm improves significantly.
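The T-table idea merges SubBytes and MixColumns into a single 32-bit table lookup per state byte. The sketch below builds one such table (the one usually called Te0) from first principles: GF(2^8) multiplication with the AES polynomial, the S-box as inverse plus affine map, and the MixColumns column (2, 1, 1, 3) packed into 32 bits. The actual paper's C code is not reproduced here; this is an independent construction of the same table.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        a = (a << 1) ^ (0x11B if a & 0x80 else 0)
        b >>= 1
    return r

def sbox(x):
    """AES S-box: multiplicative inverse followed by the affine map."""
    inv = next((y for y in range(256) if gf_mul(x, y) == 1), 0)
    rot = lambda b, n: ((b << n) | (b >> (8 - n))) & 0xFF
    return inv ^ rot(inv, 1) ^ rot(inv, 2) ^ rot(inv, 3) ^ rot(inv, 4) ^ 0x63

# Te0 merges SubBytes and one MixColumns column into one lookup per byte.
T0 = [(gf_mul(sbox(x), 2) << 24) | (sbox(x) << 16)
      | (sbox(x) << 8) | gf_mul(sbox(x), 3) for x in range(256)]
```

At run time, one round column is then computed with four table lookups and three XORs instead of separate substitution and matrix-multiplication steps, which is where the speedup on ARM comes from.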
Related Articles | Metrics
Maximum a posteriori classification method based on kernel method under t distribution
Ru-yan ZHANG Shi-tong WANG Yao XU
Journal of Computer Applications    2011, 31 (04): 1079-1083.   DOI: 10.3724/SP.J.1087.2011.01079
Abstract1552)      PDF (738KB)(411)       Save
In order to solve the problem that the multivariate normal distribution fails to fit sample data with heavy tails, a multiclass classification method based on the t distribution was proposed. Sample data were mapped into a high-dimensional feature space by the kernel method, the Maximum A Posteriori (MAP) estimate was obtained by a Bayesian classifier, and the classification result was then obtained. Because the multivariate t distribution has an additional degrees-of-freedom parameter v, it can more easily capture the complexity of the sample data and enjoys better robustness. Extensive experiments on five standard UCI data sets and three facial image data sets show that the method achieves better classification performance and is feasible.
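The MAP rule with class-conditional multivariate t densities can be sketched directly from the density formula (the kernel mapping is omitted; function names and the toy parameters are assumptions, not the paper's code):

```python
import numpy as np
from math import lgamma, log, pi

def mvt_logpdf(x, mu, cov, df):
    """Log-density of a multivariate Student-t distribution; the extra
    degrees-of-freedom parameter df gives heavier tails than a Gaussian."""
    d = len(mu)
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    maha = diff @ np.linalg.solve(cov, diff)   # Mahalanobis distance
    _, logdet = np.linalg.slogdet(cov)
    return (lgamma((df + d) / 2) - lgamma(df / 2)
            - 0.5 * d * log(df * pi) - 0.5 * logdet
            - 0.5 * (df + d) * log(1 + maha / df))

def map_classify(x, params, priors):
    """MAP rule: choose the class maximizing log prior + log likelihood."""
    scores = [log(priors[k]) + mvt_logpdf(x, *params[k])
              for k in range(len(params))]
    return int(np.argmax(scores))
```

In one dimension with df = 1 this reduces to the Cauchy density, whose value at the mode is 1/pi, a convenient sanity check.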
Related Articles | Metrics
Shared-memory parallel multi-target tracking
Xiao-gang WANG Xiao-juan WU Xin ZHOU Xiao-yan ZHANG
Journal of Computer Applications   
Abstract1516)      PDF (663KB)(976)       Save
The application of particle filtering in real video-based multi-target tracking systems is limited because of its high computational complexity. To overcome the efficiency bottleneck, a coarse-grained parallel multi-target tracking implementation based on the OpenMP shared-memory parallel programming model was explored. A list of tracked targets was maintained in a shared variable, and each target was tracked by an independent particle filter. The number of threads and the number of targets tracked by each thread were determined by the number of processing units. Compared to its corresponding optimized sequential version, the parallel implementation increases the number of targets tracked in real time from 2 to 8, and is therefore of much more practical value.
Related Articles | Metrics